
    Geometric Ergodicity of Gibbs Samplers in Bayesian Penalized Regression Models

    We consider three Bayesian penalized regression models and show that the respective deterministic scan Gibbs samplers are geometrically ergodic regardless of the dimension of the regression problem. We prove geometric ergodicity of the Gibbs samplers for the Bayesian fused lasso, the Bayesian group lasso, and the Bayesian sparse group lasso. Geometric ergodicity, along with a moment condition, implies a Markov chain central limit theorem for Monte Carlo averages and ensures reliable output analysis. Our geometric ergodicity results also allow us to provide default starting values for the Gibbs samplers.
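    A deterministic scan Gibbs sampler cycles through the full conditional distributions in a fixed order, and the Markov chain CLT then justifies error bars on Monte Carlo averages. The sketch below is a toy illustration on a bivariate normal target, not the Bayesian fused/group/sparse-group lasso samplers the abstract analyzes; the target, correlation value, and function names are our own choices.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter, rng, x0=0.0, y0=0.0):
    """Deterministic-scan Gibbs sampler for a toy bivariate normal
    target with correlation rho.  Each full conditional is
    X | Y = y ~ N(rho * y, 1 - rho^2), and symmetrically for Y | X."""
    draws = np.empty((n_iter, 2))
    x, y = x0, y0
    s = np.sqrt(1.0 - rho**2)  # conditional standard deviation
    for t in range(n_iter):
        x = rng.normal(rho * y, s)  # update x given the current y
        y = rng.normal(rho * x, s)  # update y given the new x
        draws[t] = (x, y)
    return draws

rng = np.random.default_rng(2)
draws = gibbs_bivariate_normal(0.5, 50000, rng)
mc_avg = draws.mean(axis=0)  # Monte Carlo average of each coordinate
```

    For a geometrically ergodic chain such as this one, the CLT guarantees that `mc_avg` concentrates around the true mean (here zero) at the usual root-n rate, which is what makes output analysis reliable.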

    Revisiting the Gelman-Rubin Diagnostic

    Gelman and Rubin's (1992) convergence diagnostic is one of the most popular methods for terminating a Markov chain Monte Carlo (MCMC) sampler. Since the seminal paper, researchers have developed sophisticated methods for estimating the variance of Monte Carlo averages. We show that these estimators find immediate use in the Gelman-Rubin statistic, a connection not previously established in the literature. We incorporate these estimators to upgrade both the univariate and multivariate Gelman-Rubin statistics, leading to improved stability in MCMC termination time. An immediate advantage is that our new Gelman-Rubin statistic can be calculated for a single chain. In addition, we establish a one-to-one relationship between the Gelman-Rubin statistic and effective sample size. Leveraging this relationship, we develop a principled termination criterion for the Gelman-Rubin statistic. Finally, we demonstrate the utility of our improved diagnostic via examples.
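    For context, the classic multi-chain diagnostic being revisited compares between-chain and within-chain variance. The sketch below implements the original univariate potential scale reduction factor (PSRF), not the improved single-chain estimator the abstract proposes; the function name and the toy four-chain setup are our own.

```python
import numpy as np

def gelman_rubin(chains):
    """Classic univariate Gelman-Rubin PSRF.

    chains: array of shape (m, n) holding m chains of n draws each
    of a scalar quantity."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()  # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Four chains already sampling the same N(0, 1) target: PSRF near 1,
# so the usual rule would declare convergence.
psrf = gelman_rubin(rng.standard_normal((4, 5000)))
```

    Values of the PSRF well above 1 indicate the chains have not yet mixed; the abstract's contribution is to replace the naive variance estimators in this ratio with modern ones and to link the statistic to effective sample size.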

    Multivariate strong invariance principles in Markov chain Monte Carlo

    Strong invariance principles in Markov chain Monte Carlo are crucial to theoretically grounded output analysis. Using the wide-sense regenerative nature of the process, we obtain explicit bounds on the convergence rates in the strong invariance principle for partial sums of multivariate ergodic Markov chains. Consequently, we present results on the existence of strong invariance principles for both polynomially and geometrically ergodic Markov chains without requiring a 1-step minorization condition. Our tight and explicit rates have a direct impact on output analysis, as they allow verification of important conditions for the strong consistency of certain variance estimators.
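    One family of variance estimators whose strong consistency rests on such strong invariance principles is the multivariate batch means estimator of the Monte Carlo covariance matrix. The sketch below is a standard textbook version under our own assumptions (function name, batch size, and the iid stand-in chain are illustrative, not from the paper).

```python
import numpy as np

def batch_means_cov(chain, b):
    """Multivariate batch means estimator of the asymptotic Monte
    Carlo covariance matrix.

    chain: (n, d) array of Markov chain output; b: batch size."""
    n, d = chain.shape
    a = n // b                       # number of full batches
    trimmed = chain[:a * b]          # drop any incomplete final batch
    batch_means = trimmed.reshape(a, b, d).mean(axis=1)
    overall = trimmed.mean(axis=0)
    diff = batch_means - overall
    return b / (a - 1) * diff.T @ diff

rng = np.random.default_rng(3)
# iid N(0, I) stand-in for Markov chain output; the true asymptotic
# covariance is then the 2x2 identity matrix.
chain = rng.standard_normal((100000, 2))
sigma_hat = batch_means_cov(chain, b=316)  # b of order sqrt(n)
```

    For correlated MCMC output the same estimator accounts for autocorrelation through the batch structure, provided the batch size grows appropriately with the chain length.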

    A principled stopping rule for importance sampling

    Importance sampling (IS) is a Monte Carlo technique that relies on weighted samples, simulated from a proposal distribution, to estimate intractable integrals. The quality of the estimators improves with the number of samples. However, the number of samples required to achieve a desired quality of estimation is unknown and depends on the quantity of interest, the estimator, and the chosen proposal. We present a sequential stopping rule that terminates simulation when the overall variability in estimation is relatively small. The proposed methodology closely connects to the idea of an effective sample size in IS and overcomes crucial shortcomings of existing metrics, e.g., it acknowledges multivariate estimation problems. Our stopping rule retains asymptotic guarantees and provides users with a clear guideline on when to stop the simulation in IS.
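    The stopping rule itself is the paper's contribution; the sketch below only illustrates the two ingredients the abstract refers to, under a toy setting of our own choosing: a self-normalized IS estimate from weighted samples, and the standard Kish effective sample size that existing heuristics are built on.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Toy setting (ours, not the paper's): estimate E[X] = 0 under a
# N(0, 1) target using draws from a wider N(0, 2^2) proposal.
x = rng.normal(0.0, 2.0, size=n)

# Log importance weights: log target - log proposal; normalizing
# constants may be dropped since the estimator is self-normalized.
log_w = -0.5 * x**2 - (-0.5 * (x / 2.0) ** 2 - np.log(2.0))
w = np.exp(log_w - log_w.max())  # subtract max for numerical stability

est = np.sum(w * x) / np.sum(w)          # self-normalized IS estimate
ess = np.sum(w) ** 2 / np.sum(w**2)      # Kish effective sample size
```

    Uneven weights shrink `ess` well below `n`, signaling a poor proposal; the paper's rule instead monitors the overall estimation variability, which handles multivariate quantities of interest.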